11 research outputs found

    Fixed Price Approximability of the Optimal Gain From Trade

    Get PDF
    Bilateral trade is a fundamental economic scenario comprising a strategically acting buyer and seller, each holding a valuation for the item drawn from a publicly known distribution. A mechanism is supposed to facilitate trade between these agents whenever such trade is beneficial. It was recently shown that the only mechanisms that are simultaneously dominant strategy incentive compatible (DSIC), strongly budget balanced (SBB), and ex-post individually rational (IR) are fixed price mechanisms, i.e., mechanisms parametrised by a price p such that trade occurs if and only if the valuation of the buyer is at least p and the valuation of the seller is at most p. The gain from trade is the increase in welfare that results from applying a mechanism; here we study the gain from trade achievable by fixed price mechanisms. We explore this question both for the bilateral trade setting and for a double auction setting with multiple buyers and sellers. We first identify a fixed price mechanism whose gain from trade is within a factor 2/r of the optimum, where r is the probability that the seller's valuation does not exceed the buyer's valuation. This extends a previous result by McAfee. Subsequently, we improve this approximation factor in an asymptotic sense, by showing that a more sophisticated rule for setting the fixed price results in an expected gain from trade within a factor O(log(1/r)) of the optimal gain from trade; this is asymptotically the best approximation factor possible. Lastly, we extend our study of fixed price mechanisms to the double auction setting defined by a set of multiple i.i.d. unit demand buyers and i.i.d. unit supply sellers. We present a fixed price mechanism that, for every epsilon > 0, achieves a gain from trade of at least (1 - epsilon) times the expected optimal gain from trade with probability 1 - 2/e^{#T epsilon^2 / 2}, where #T is the expected number of trades resulting from the double auction.
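    The basic fixed price mechanism described above is simple enough to simulate. The sketch below assumes uniform [0, 1] priors for both agents (an illustrative choice; the paper's price-setting rules are more refined) and estimates by Monte Carlo the gain from trade of a fixed price p against the optimum, which trades whenever the buyer's valuation exceeds the seller's:

```python
import random

def fixed_price_gft(price, n=100_000, seed=0):
    """Monte Carlo estimate of (mechanism GFT, optimal GFT) under
    uniform [0, 1] priors. Distributions and price are illustrative."""
    rng = random.Random(seed)
    mech = opt = 0.0
    for _ in range(n):
        b, s = rng.random(), rng.random()   # buyer / seller valuations
        if b >= s:                          # the optimum trades whenever beneficial
            opt += b - s
        if b >= price >= s:                 # fixed price: trade iff s <= p <= b
            mech += b - s
    return mech / n, opt / n

mech, opt = fixed_price_gft(0.5)            # p = 0.5 is the symmetric choice here
```

    With these priors the ratio mech/opt comes out to roughly 3/4, illustrating that a single well-placed price already recovers a constant fraction of the optimal gain from trade.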

    A Bayesian approach to analyzing phenotype microarray data enables estimation of microbial growth parameters

    Get PDF
    Biolog phenotype microarrays enable simultaneous, high-throughput analysis of cell cultures in different environments. The output is high-density time-course data showing redox curves (approximating growth) for each experimental condition. The software provided with the OmniLog incubator/reader summarises each time course as a single datum, so most of the information is discarded. However, the time courses can be extremely varied and often contain detailed qualitative (shape of curve) and quantitative (values of parameters) information. We present a novel Bayesian approach to estimating parameters from Phenotype Microarray data, fitting growth models using Markov chain Monte Carlo methods to enable high-throughput estimation of important information, including the length of the lag phase, the maximal "growth" rate, and the maximum output. We find that the Baranyi model for microbial growth is useful for fitting Biolog data. Moreover, we introduce a new growth model that allows for diauxic growth with a lag phase, which is particularly useful where Phenotype Microarrays have been applied to cells grown in complex mixtures of substrates, for example in industrial or biotechnological applications such as worts in brewing. Our approach provides more useful information from Biolog data than existing, competing methods, and allows for valuable comparisons between data series and across different models.
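    For intuition about what is being fitted, here is the standard closed-form Baranyi–Roberts growth curve (log cell density over time). The parameter names are conventional but the values below are illustrative; in a pipeline like the paper's, a function of this shape would sit inside the likelihood of an MCMC sampler rather than be evaluated in isolation:

```python
import math

def baranyi(t, y0, ymax, mu, lam):
    """Baranyi-Roberts growth curve: log cell density at time t.
    y0 / ymax are initial and maximal log density, mu is the maximal
    growth rate, lam the lag time (so h0 = mu * lam)."""
    h0 = mu * lam
    # A(t): "effective time", delayed by the lag phase
    a = t + (1.0 / mu) * math.log(math.exp(-mu * t) + math.exp(-h0)
                                  - math.exp(-mu * t - h0))
    # logistic-style saturation towards ymax
    return y0 + mu * a - math.log(1.0 + (math.exp(mu * a) - 1.0)
                                  / math.exp(ymax - y0))
```

    The curve starts at y0, stays near it during the lag phase of length lam, rises at rate mu, and saturates at ymax — exactly the three quantities (lag length, maximal rate, maximum output) the abstract highlights.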

    Revenue maximization for market intermediation with correlated priors

    No full text
    We study the computational challenge faced by an intermediary who attempts to profit from trade with a small number of buyers and sellers of some item. In the version of the problem that we study, the number of buyers and sellers is constant, but the joint distribution of their values for the item may be complicated. We consider discretized distributions, where the complexity parameter is the support size, i.e., the number of different prices that may occur. We show that maximizing the expected revenue is computationally tractable (via a linear program) if we are allowed to use randomized mechanisms. For the deterministic case, we show how an optimal mechanism can be efficiently computed for the one-seller/one-buyer case, but give a contrasting NP-completeness result for the one-seller/two-buyer case.
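    For intuition about the deterministic one-seller/one-buyer case, the following brute-force sketch (for illustration only — not the paper's efficient algorithm) enumerates bid/ask price pairs over a small discretized support of a correlated prior:

```python
from itertools import product

def best_deterministic_prices(support, joint):
    """Exhaustive search for the revenue-maximising deterministic
    bid/ask pair for one seller and one buyer with correlated values.
    joint[(s, b)] is the probability the seller values the item at s
    and the buyer at b; the intermediary buys at p_s, sells at p_b,
    and trade happens iff s <= p_s and b >= p_b."""
    best_rev, best_pair = float("-inf"), None
    for p_s, p_b in product(support, repeat=2):
        revenue = sum(prob * (p_b - p_s)
                      for (s, b), prob in joint.items()
                      if s <= p_s and b >= p_b)
        if revenue > best_rev:
            best_rev, best_pair = revenue, (p_s, p_b)
    return best_rev, best_pair

# a small correlated prior: seller low & buyer mid, or seller mid & buyer high
rev, (p_s, p_b) = best_deterministic_prices([1, 2, 3], {(1, 2): 0.5, (2, 3): 0.5})
```

    The quadratic blow-up of candidate price pairs (and, with more buyers, of allocation rules) hints at why the multi-buyer deterministic case becomes hard, while the randomized case stays an LP.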

    Multi-unit bilateral trade

    No full text
    We characterise the set of dominant strategy incentive compatible (DSIC), strongly budget balanced (SBB), and ex-post individually rational (IR) mechanisms for the multi-unit bilateral trade setting. In this setting there is a single buyer and a single seller who holds a finite number k of identical items. The mechanism has to decide how many units of the item are transferred from the seller to the buyer and how much money is transferred from the buyer to the seller. We consider two classes of valuation functions for the buyer and seller: valuations that are increasing in the number of units in possession, and the more specific class of valuations that are increasing and submodular. Furthermore, we present some approximation results about the performance of certain such mechanisms in terms of social welfare: for increasing submodular valuation functions, we show the existence of a deterministic 2-approximation mechanism and a randomised e/(e − 1)-approximation mechanism, matching the best known bounds for the single-item setting.
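    The allocation side of this setting can be made concrete with a small sketch: given increasing valuations over 0..k units, the welfare-maximising number of units to transfer is a one-line maximisation (payments, which the mechanisms above must also set truthfully, are omitted; the example valuations are invented):

```python
def optimal_trade(k, v_buyer, v_seller):
    """Welfare-maximising number of units to transfer when the seller
    holds k identical items. v_buyer[q] / v_seller[q] give each agent's
    value for holding q units (increasing lists of length k + 1).
    Returns (q_star, total_welfare)."""
    return max(((q, v_buyer[q] + v_seller[k - q]) for q in range(k + 1)),
               key=lambda t: t[1])

# buyer has decreasing marginals (submodular); seller values stock linearly
q, w = optimal_trade(3, [0, 5, 8, 9], [0, 2, 4, 6])
```

    Here transferring 2 of the 3 units maximises welfare: the buyer's third marginal unit is worth less than the seller's. The difficulty the paper addresses is achieving (a constant fraction of) this welfare while remaining DSIC, SBB, and ex-post IR.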

    Associations of event-related brain potentials and Alzheimer’s disease severity: a longitudinal study

    No full text
    Background: So far, no cost-efficient, widely used biomarkers have been established to facilitate the objectivization of Alzheimer's disease (AD) diagnosis and monitoring. Research suggests that event-related potentials (ERPs) reflect neurodegenerative processes in AD and might qualify as neurophysiological AD markers. Objectives: First, to examine which ERP component correlates the most with AD severity, as measured by the Mini-Mental State Examination (MMSE). Then, to analyze the temporal change of this component as AD progresses. Methods: Sixty-three subjects (31 with possible, 32 with probable AD diagnosis) were recruited as part of the cohort study Prospective Dementia Registry Austria (PRODEM). For a maximum of 18 months, patients revisited every 6 months for follow-up assessments. ERPs were elicited using an auditory oddball paradigm. P300 and N200 latencies were determined with regard to target as well as difference-wave ERPs, whereas P50 amplitude was measured from standard stimuli waveforms. Results: P300 latency exhibited the strongest association with AD severity (e.g., r = –0.512, p …). Conclusions: P300 and N200 latency significantly correlated with disease severity in probable AD, whereas P50 amplitude did not. P300 latency, which showed the highest correlation coefficients with MMSE, significantly increased over the course of the 18-month study period in probable AD patients. The magnitude of the observed prolongation is in line with other longitudinal AD studies and substantially higher than in normal ageing, as reported in previous trials (no healthy controls were included in our study).

    Bayesian Gaussian process classification from event-related brain potentials in Alzheimer’s disease

    No full text
    Event-related potentials (ERPs) have been shown to reflect neurodegenerative processes in Alzheimer's disease (AD) and might qualify as non-invasive and cost-effective markers to facilitate the objectivization of AD assessment in daily clinical practice. Lately, the combination of multivariate pattern analysis (MVPA) and Gaussian process classification (GPC) has gained interest in the neuroscientific community. Here, we demonstrate how an MVPA-GPC approach can be applied to electrophysiological data. Furthermore, in order to account for the temporal information of ERPs, we develop a novel method that integrates interregional synchrony of ERP time signatures. Using real-life ERP recordings of a prospective AD cohort study (PRODEM), we empirically investigate the usefulness of the proposed framework for building neurophysiological markers for single-subject classification tasks. GPC outperforms the probabilistic reference method in both tasks, with the highest overall AUC (0.802) being achieved using the new spatiotemporal method in the prediction of rapid cognitive decline.
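    As a rough, self-contained illustration of GP-based scoring evaluated by AUC: the sketch below fits a GP with an RBF kernel to ±1 labels and ranks test points by the posterior mean. This is a least-squares stand-in for full Gaussian process classification, which requires a Laplace or EP approximation of the non-Gaussian likelihood; all data here is synthetic and unrelated to the PRODEM recordings:

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel matrix between row-vector sets a, b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_scores(x_train, y_pm1, x_test, noise=1e-2):
    """Posterior mean of a GP regressed on +/-1 labels, used as a
    ranking score (a simplification of proper GP classification)."""
    k = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    alpha = np.linalg.solve(k, y_pm1)
    return rbf(x_test, x_train) @ alpha

def auc(scores, labels):
    """AUC = probability a random positive outranks a random negative
    (ties count half)."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(pos) * len(neg))
```

    In the MVPA spirit, each row of x_train would be a multi-channel feature vector per subject (here just one synthetic feature), and AUC is the same threshold-free metric the abstract reports.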
